Neural Network Training with Second Order Algorithms
Author
Abstract
Second order algorithms are very efficient for neural network training because of their fast convergence. In traditional implementations of second order algorithms [Hagan and Menhaj 1994], the Jacobian matrix is calculated and stored, which may cause memory limitation problems when training with large sets of patterns. In this paper, an improved computation is introduced to solve the memory limitation problem of second order algorithms. The proposed method calculates the gradient vector and the Hessian matrix directly, without storing or multiplying the Jacobian matrix. Memory cost for training is significantly reduced by replacing matrix operations with vector operations, and training speed is also improved as a result of the memory reduction. The proposed implementation of second order algorithms can be applied to train with an essentially unlimited number of patterns.
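The core idea can be illustrated with a minimal NumPy sketch of one Levenberg-Marquardt-style update (a sketch of the idea only, not the authors' reference implementation). The forward and jacobian_row callbacks are hypothetical placeholders for the network output and the single Jacobian row of one training pattern; the gradient vector and quasi-Hessian matrix are accumulated pattern by pattern, so the full Jacobian is never stored.

import numpy as np

def lm_update(weights, patterns, targets, forward, jacobian_row, mu=0.01):
    # One Levenberg-Marquardt-style step that accumulates the gradient vector
    # and quasi-Hessian matrix pattern by pattern, so the full Jacobian matrix
    # is never stored. `forward` and `jacobian_row` are hypothetical callbacks
    # returning the network output and the row de/dw for a single pattern.
    n = weights.size
    H = np.zeros((n, n))   # quasi-Hessian, built as a sum of outer products
    g = np.zeros(n)        # gradient vector
    for x, t in zip(patterns, targets):
        e = forward(weights, x) - t       # scalar error for this pattern
        j = jacobian_row(weights, x)      # one Jacobian row (length n), then discarded
        H += np.outer(j, j)               # contribution to J^T J without storing J
        g += j * e                        # contribution to J^T e without storing J
    # Solve (H + mu*I) dw = g and update the weights
    dw = np.linalg.solve(H + mu * np.eye(n), g)
    return weights - dw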
Similar Articles
Two Novel Learning Algorithms for CMAC Neural Network Based on Changeable Learning Rate
The Cerebellar Model Articulation Controller (CMAC) neural network is a computational model of the cerebellum which acts as a lookup table. The advantages of CMAC are fast learning convergence and the capability of mapping nonlinear functions, due to its local generalization of weight updating, simple structure and easy processing. In the training phase, the disadvantage of some CMAC models is an unstable phenomenon...
Comparison of Artificial Neural Network Training Algorithms for Predicting the Weight of Kurdi Sheep using Image Processing
Extended Abstract Introduction and Objective: Due to weakness, the occurrence of unwanted errors, the impact of the environment and exposure to natural events, humans always make mistakes in their assessments of the environment or of different subjects, so that different people's perceptions of a single, unique event may be very different and diverse. Nowadays, with the development of image proc...
A Hybrid Optimization Algorithm for Learning Deep Models
Deep learning is a subset of machine learning that is widely used in Artificial Intelligence (AI) fields such as natural language processing and machine vision. The learning algorithms require optimization in multiple respects. Generally, model-based inference needs to solve an optimization problem. In deep learning, the most important problem that can be solved by optimization is neural n...
Training Radial Basis Function Neural Network using Stochastic Fractal Search Algorithm to Classify Sonar Dataset
Radial Basis Function Neural Networks (RBF NNs) are among the most applicable NNs for the classification of real targets. Despite the use of recursive methods and gradient descent for training RBF NNs, improper classification accuracy, falling into local minima and low convergence speed are defects of this type of network. In order to overcome these defects, heuristic and meta-heuristic algorith...